19 - Artificial Intelligence II [ID:47310]

Okay. Whoa, this is loud. Let's just leave it at that.

Hello everybody.

Again, Professor Kohlhase is not here today.

It's not entirely clear whether he will manage to be here tomorrow. He probably will be; if not, I will have to take over again tomorrow.

Okay.

The current topic is logic in learning.

We have a recap of the most important things, starting with a logical formulation of the learning problem.

So unlike all of the statistical stuff you've been doing over the last couple of weeks, now we're going to do things in a more rigorous way. I.e., the basic idea is that we try to express everything in terms of logical formulas.

That means the examples that we have, the inputs and outputs, will be Boolean values and/or logical formulas; the hypotheses that we try to learn will be explanations in terms of logical formulas, and so on and so forth.

One key aspect here is that we need to, or at least can to some extent, use prior knowledge, which is important for these kinds of topics.

The core idea is that, given a description of some example in our data set as a logical formula or a set of logical formulas D, the classification C that we want to assign to that particular example should be logically entailed by the description together with some hypothesis that we want to learn. That's what we call the explanation constraint.
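Written out in the usual textbook notation (this is what AIMA calls the entailment constraint; H stands for the hypothesis to be learned):

```latex
% Entailment constraint: hypothesis plus descriptions must entail the
% classifications of the examples.
\[
  \mathit{Hypothesis} \land \mathit{Descriptions} \models \mathit{Classifications}
\]
% Per example j, with description D_j and classification C_j:
\[
  H \land D_j \models C_j
\]
```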

So in a certain sense, the hypothesis aims to explain why the data is being classified the way that it is.

The advantage in comparison to all the statistical methods should be obvious: we actually have an explanation for why we get what we get, rather than just some label and some probability estimate or whatever.

So obviously there is a very trivial way to solve this problem: we just take the classification itself as the hypothesis. Then the constraint is trivially satisfied. Obviously that's not what we want, so we will introduce certain constraints that make sure that our hypothesis is not just the particular classification of the particular sample that we're looking at.
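To make this concrete, here is a minimal sketch (all names hypothetical, and with a simplification: for Boolean classifications we just check that the hypothesis agrees with the classification on every example, rather than running a full entailment procedure). The point is that the "memorize the labels" hypothesis passes the check trivially while explaining nothing:

```python
def satisfies_constraint(hypothesis, examples):
    """Check that the hypothesis reproduces the classification of every
    (description, classification) pair in the data set."""
    return all(hypothesis(d) == c for d, c in examples)

# Toy data set: descriptions are dicts of Boolean attributes.
examples = [
    ({"hungry": True, "patrons_full": True}, True),
    ({"hungry": False, "patrons_full": False}, False),
]

# A meaningful candidate hypothesis: classify by one attribute.
h = lambda d: d["hungry"]
print(satisfies_constraint(h, examples))  # True

# The trivial "hypothesis": just memorize each example's label.
memo = {tuple(sorted(d.items())): c for d, c in examples}
trivial = lambda d: memo[tuple(sorted(d.items()))]
print(satisfies_constraint(trivial, examples))  # also True -- but explains nothing
```

The trivial hypothesis satisfies the constraint by construction, which is exactly why additional restrictions on the hypothesis space are needed.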

The nice thing is that, if you remember decision trees, we can extract these kinds of logical formulations of our examples, i.e. the data and the explanations themselves, directly from the decision trees, just to get some intuition of how we would express that kind of stuff in the first place.
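As a rough sketch of that extraction (tree encoding and attribute names are illustrative, loosely based on the restaurant domain): each path from the root to a positive leaf becomes a conjunction of attribute tests, and the learned goal predicate is the disjunction of those conjunctions.

```python
# A tiny decision tree as nested dicts; Boolean leaves are classifications.
tree = {
    "attr": "Patrons",
    "branches": {
        "None": False,
        "Some": True,
        "Full": {
            "attr": "Hungry",
            "branches": {"Yes": True, "No": False},
        },
    },
}

def positive_paths(node, path=()):
    """Collect the attribute=value conjunctions leading to True leaves."""
    if isinstance(node, bool):
        return [path] if node else []
    paths = []
    for value, child in node["branches"].items():
        paths += positive_paths(child, path + ((node["attr"], value),))
    return paths

def to_formula(paths):
    """Render the paths as a disjunction of conjunctions."""
    conj = lambda p: " AND ".join(f"{a}={v}" for a, v in p)
    return " OR ".join(f"({conj(p)})" for p in paths)

print(to_formula(positive_paths(tree)))
# → (Patrons=Some) OR (Patrons=Full AND Hungry=Yes)
```

This is exactly the kind of logical formula the lecture refers to: a candidate hypothesis read off directly from the tree.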

So if we take the classic restaurant example, which we always do,

Part of a video series

Accessible via: Open access

Duration: 01:25:04 min

Recording date: 2023-06-20

Uploaded: 2023-06-21 17:29:06

Language: en-US
